
    Portallax: bringing 3D display capabilities to handhelds

    We present Portallax, a clip-on technology to retrofit mobile devices with 3D display capabilities. Available technologies (e.g. Nintendo 3DS or LG Optimus) and clip-on solutions (e.g. 3DeeSlide and Grilli3D) force users to keep their head and the device in fixed positions. This contradicts the nature of mobile scenarios and limits the use of interaction techniques such as tilting the device to control a game. Portallax uses an actuated parallax barrier and face tracking to realign the barrier's position to the user's position. This allows us to provide stereo, motion parallax and perspective correction cues across 60 degrees in front of the device. Our optimized barrier design minimizes colour distortion, maximizes resolution and produces bigger view zones, which support ~81% of adults' interpupillary distances and allow eye tracking with the front camera. We present a reference implementation, evaluate its key features and provide example applications illustrating the potential of Portallax.
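    For illustration, the sketch below shows the basic parallax-barrier geometry behind this realignment: shifting the barrier laterally by roughly gap × x / z keeps the view zones centred on a tracked face at lateral offset x and viewing distance z. The function, parameter names and default gap are illustrative assumptions, not the paper's implementation.

```python
def barrier_shift(face_x_mm: float, face_z_mm: float, gap_mm: float = 0.5) -> float:
    """Lateral barrier shift (mm) that keeps the view zones centred on a viewer
    at lateral offset face_x_mm and viewing distance face_z_mm.

    Uses the similar-triangles relation for a parallax barrier placed gap_mm in
    front of the pixel plane: shift ~ -gap * x / z. Names and the default gap
    are illustrative, not taken from the paper.
    """
    if face_z_mm <= 0:
        raise ValueError("viewer must be in front of the display")
    return -gap_mm * face_x_mm / face_z_mm

# Example: viewer 60 mm to the right of centre, 300 mm from the screen
print(f"actuate barrier by {barrier_shift(60.0, 300.0):.3f} mm")
```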

    OpenMPD: A Low-Level Presentation Engine for Multimodal Particle-Based Displays

    Phased arrays of transducers have been quickly evolving in terms of software and hardware, with applications in haptics (acoustic vibrations), display (levitation), and audio. Most recently, Multimodal Particle-based Displays (MPDs) have even demonstrated volumetric content that can be seen, heard, and felt simultaneously, without additional instrumentation. However, current software tools only support individual modalities and do not address the integration and exploitation of the multi-modal potential of MPDs. This is because there is no standardized presentation pipeline tackling the challenges of presenting this kind of multi-modal content (e.g., multi-modal support, multi-rate synchronization at 10 kHz, visual rendering, or synchronization and continuity). This article presents OpenMPD, a low-level presentation engine that deals with these challenges and allows structured exploitation of any type of MPD content (i.e., visual, tactile, audio). We characterize OpenMPD's performance and illustrate how it can be integrated into higher-level development tools (i.e., the Unity game engine). We then illustrate its ability to enable novel presentation capabilities, such as support for multiple MPD contents, dexterous manipulation of fast-moving particles, or novel swept-volume MPD content.
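    As a rough sketch of what such a multi-rate presentation pipeline involves, the snippet below defines a hypothetical multimodal primitive (a particle path plus an amplitude track) and streams synchronised frames at the 10 kHz device rate mentioned in the abstract. The class and function names are assumptions and do not reproduce OpenMPD's actual API.

```python
from dataclasses import dataclass
from typing import List, Tuple

UPDATE_RATE_HZ = 10_000  # MPDs are driven at roughly 10 kHz per the abstract

@dataclass
class Primitive:
    """One piece of multimodal content: a particle path plus an amplitude track.

    Positions are metres, amplitudes are normalised 0..1. Class and field names
    are hypothetical, not OpenMPD's real API.
    """
    positions: List[Tuple[float, float, float]]   # resampled to the device rate
    amplitudes: List[float]                        # audio/haptic modulation track

def frame_stream(primitives: List[Primitive]):
    """Yield one synchronised (positions, amplitudes) frame per 100 us tick,
    keeping visual, tactile and audio tracks aligned at the device rate."""
    n_frames = max(max(len(p.positions), len(p.amplitudes)) for p in primitives)
    for i in range(n_frames):
        yield (
            [p.positions[i % len(p.positions)] for p in primitives],
            [p.amplitudes[i % len(p.amplitudes)] for p in primitives],
        )

# Example: one particle holding position while an amplitude ramp plays
p = Primitive(positions=[(0.0, 0.0, 0.12)], amplitudes=[0.0, 0.5, 1.0])
for frame in frame_stream([p]):
    pass  # each frame would be sent to the device every 1/UPDATE_RATE_HZ seconds
```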

    Enhancing interactivity with transcranial direct current stimulation

    Transcranial Direct Current Stimulation (tDCS) is a non-invasive type of neural stimulation known for modulating cortical excitability, leading to positive effects on working memory and attention. The availability of low-cost, consumer-grade tDCS has democratized access to such devices, allowing us to explore their applicability to HCI. We review the relevant literature and identify potential avenues for exploration in the context of enhancing interactivity and using tDCS in HCI.

    High-speed acoustic holography with arbitrary scattering objects

    Recent advances in high-speed acoustic holography have enabled levitation-based volumetric displays with tactile and audio sensations. However, current approaches do not compute sound scattering from objects’ surfaces; thus, any physical object placed inside the display can distort the sound field. Here, we present a fast computational technique that allows high-speed multipoint levitation even with arbitrary sound-scattering surfaces and demonstrate a volumetric display that works in the presence of any physical object. Our technique has a two-step scattering model and a simplified levitation solver, which together can achieve more than 10,000 updates per second to create volumetric images above and below static sound-scattering objects. The model estimates transducer contributions in real time by reformulating the boundary element method for acoustic holography, and the solver creates multiple levitation traps. We explain how our technique achieves its speed with minimum loss in trap quality and illustrate how it brings digital and physical content together by demonstrating mixed-reality interactive applications.
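    For context, the sketch below covers only the textbook free-field step: computing per-transducer phases so that all waves arrive in phase at a single focal point. The paper's actual contribution, accounting for scattering off nearby objects via a reformulated boundary element method, is not reproduced here; the array geometry and frequency are illustrative assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
FREQ = 40_000.0          # typical 40 kHz levitation transducers
K = 2 * np.pi * FREQ / SPEED_OF_SOUND  # wavenumber

def focus_phases(transducer_xyz: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Per-transducer emission phases that make all waves arrive in phase at
    `target`, i.e. a single free-field focal point.

    This is only the classic focusing step; it ignores scattering entirely.
    """
    distances = np.linalg.norm(transducer_xyz - target, axis=1)
    return (-K * distances) % (2 * np.pi)

# Example: a 16x16 array with 10 mm pitch focusing 10 cm above its centre
xs, ys = np.meshgrid(np.arange(16) * 0.01, np.arange(16) * 0.01)
array_xyz = np.column_stack([xs.ravel() - 0.075, ys.ravel() - 0.075, np.zeros(256)])
phases = focus_phases(array_xyz, np.array([0.0, 0.0, 0.10]))
```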

    Content Rendering for Acoustic Levitation Displays via Optimal Path Following

    Recently, volumetric displays based on acoustic levitation have demonstrated the capability to produce mid-air content using the Persistence of Vision (PoV) effect. In these displays, acoustic traps are used to rapidly move a small levitated particle along a prescribed path. This note is based on our recent work OptiTrap (Paneva et al., 2022), the first structured numerical approach for computing trap positions and timings via optimal control to produce feasible and (nearly) time-optimal trajectories that reveal generic levitated graphics. Whereas feasible trap trajectories previously had to be tuned manually for each shape and levitator by trial and error, OptiTrap automates this process and allows a systematic exploration of the range of content that a given levitation display can render. This represents a crucial milestone for future content authoring tools for acoustic levitation displays and brings volumetric displays closer to real-world applications.
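    As a simplified illustration of the feasibility question OptiTrap addresses, the sketch below samples a closed parametric path at the levitator's update rate and checks the implied trap acceleration against a budget. Equal spacing in the path parameter is a naive stand-in for OptiTrap's optimal-control timing; the update rate and acceleration bound are assumed values.

```python
import numpy as np

UPDATE_RATE_HZ = 10_000       # assumed trap position update rate of the levitator
MAX_ACCEL = 300.0             # illustrative bound on trap acceleration, m/s^2

def sample_path(shape_fn, period_s: float) -> np.ndarray:
    """Sample trap positions for one revolution of a closed parametric path.

    shape_fn maps a parameter in [0, 1) to an (x, y, z) point in metres. Timing
    is naive (uniform in the parameter) and only checked for feasibility after
    the fact, unlike OptiTrap's optimal-control formulation.
    """
    n = int(round(UPDATE_RATE_HZ * period_s))
    pts = np.array([shape_fn(i / n) for i in range(n)])
    dt = 1.0 / UPDATE_RATE_HZ
    accel = np.gradient(np.gradient(pts, dt, axis=0), dt, axis=0)
    peak = np.linalg.norm(accel, axis=1).max()
    if peak > MAX_ACCEL:
        raise ValueError(f"peak acceleration {peak:.0f} m/s^2 exceeds the trap budget")
    return pts

# Example: a 4 cm diameter circle traced 10 times per second (PoV rate)
circle = lambda t: (0.02 * np.cos(2 * np.pi * t), 0.02 * np.sin(2 * np.pi * t), 0.12)
traps = sample_path(circle, period_s=0.1)
```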

    DataLev: Acoustophoretic Data Physicalisation

    Here, we demonstrate DataLev, a data physicalisation platform with a physical assembly pipeline that allows us to computationally assemble 3D physical charts from acoustically levitated content. DataLev consists of several enhancement props that allow us to incorporate high-resolution projection, different 3D-printed artifacts and multi-modal interaction. DataLev supports reconfigurable and dynamic physicalisations, which we animate and illustrate for different chart types. Our work opens up new opportunities for data storytelling using acoustic levitation.

    Multi-point STM: Effects of Drawing Speed and Number of Focal Points on Users’ Responses using Ultrasonic Mid-Air Haptics

    Spatiotemporal modulation (STM) is used to render tactile patterns with ultrasound arrays. Previous research has only explored the effects of single-point STM parameters, such as drawing speed (Vd). Here we explore the effects of multi-point STM on both perceptual (intensity) and emotional (valence/arousal) responses. This introduces a new control parameter for STM, the number of focal points (Nfp), on top of the conventional STM parameter (Vd). Our results from a study with 30 participants showed a negative effect of Nfp on perceived intensity and arousal, but no significant effect on valence. We also found that the effects of Vd remained aligned with prior single-point results even when different Nfp were used, suggesting that effects observed for single-point STM also apply to multi-point STM. We finally derive recommendations, such as using single-point STM to produce stimuli with higher intensity and/or arousal, or using multi-point STM for a milder and more relaxing (less arousing) experience.
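    One plausible way to render such multi-point patterns is sketched below: Nfp focal points kept equally spaced along a circular path, all travelling at drawing speed Vd. The geometry, focal height and parameter values are illustrative assumptions, not the study's rendering code.

```python
import numpy as np

def stm_focal_points(radius_m: float, v_d: float, n_fp: int, t: float) -> np.ndarray:
    """Positions of n_fp focal points at time t for multi-point STM on a circle.

    All points travel along the same circular path at drawing speed v_d (m/s)
    and stay equally spaced along it. Returns an (n_fp, 3) array in metres,
    with an assumed focal plane 20 cm above the array.
    """
    circumference = 2 * np.pi * radius_m
    offsets = np.arange(n_fp) / n_fp                      # equal spacing along the path
    phase = (v_d * t / circumference + offsets) * 2 * np.pi
    return np.column_stack([radius_m * np.cos(phase),
                            radius_m * np.sin(phase),
                            np.full(n_fp, 0.2)])

# Example: 3 focal points on a 2 cm radius circle drawn at 5 m/s, sampled at t = 1 ms
print(stm_focal_points(0.02, 5.0, 3, 0.001))
```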

    DataLev: Mid-air Data Physicalisation Using Acoustic Levitation

    Data physicalisation is a technique that encodes data through the geometric and material properties of an artefact, allowing users to engage with data in a more immersive and multi-sensory way. However, current methods of data physicalisation are limited in terms of their reconfigurability and the types of materials that can be used. Acoustophoresis, a method of suspending and manipulating materials using sound waves, offers a promising solution to these challenges. In this paper, we present DataLev, a design space and platform for creating reconfigurable, multimodal data physicalisations with enriched materiality using acoustophoresis. We demonstrate the capabilities of DataLev through eight examples and evaluate its performance in terms of reconfigurability and materiality. Our work offers a new approach to data physicalisation, enabling designers to create more dynamic, engaging, and expressive artefacts.
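    As a purely hypothetical illustration of the physicalisation idea, the sketch below maps a data series to target positions for levitated beads forming a simple mid-air bar chart; it is not DataLev's actual assembly pipeline, and all parameter values are assumptions.

```python
from typing import List, Tuple

def bead_layout(values: List[float], spacing_m: float = 0.02,
                max_height_m: float = 0.08) -> List[Tuple[float, float, float]]:
    """Map a data series to target (x, y, z) positions for levitated beads,
    one bead per datum, forming a simple mid-air bar chart.

    Heights are scaled linearly into max_height_m above an assumed 12 cm base
    plane over the array. A hypothetical layout, not DataLev's pipeline.
    """
    top = max(values) or 1.0
    return [(i * spacing_m, 0.0, 0.12 + max_height_m * v / top)
            for i, v in enumerate(values)]

# Example: five data points become five bead targets
print(bead_layout([3.0, 7.0, 1.5, 4.0, 6.0]))
```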